Convergence diagnostics for stochastic gradient descent with constant step size
Authors
Abstract
Iterative procedures in stochastic optimization typically consist of a transient phase and a stationary phase. During the transient phase the procedure converges towards a region of interest, and during the stationary phase the procedure oscillates within a convergence region, commonly around a single point. In this paper, we develop a statistical diagnostic test to detect such a phase transition in the context of stochastic gradient descent with constant step size. We present theoretical and experimental results suggesting that the diagnostic behaves as intended and that the region where the diagnostic is activated coincides with the convergence region. For a class of loss functions, we derive a closed-form description of this region and support the theoretical result with simulated experiments. Finally, we suggest an application that speeds up convergence of stochastic gradient descent by halving the learning rate every time convergence is detected. This leads to remarkable speed gains that are empirically comparable to those of state-of-the-art procedures.
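As a rough illustration of how such a diagnostic can drive a learning-rate schedule, the sketch below runs constant step-size SGD and halves the step size whenever a Pflug-style statistic (the running sum of inner products of consecutive stochastic gradients, which tends to drift negative once the iterates merely oscillate around the optimum) signals convergence. The form of the statistic, the zero threshold, the burn-in length, and the noisy quadratic example are illustrative assumptions, not necessarily the exact procedure developed in the paper.

```python
import numpy as np

def sgd_with_convergence_diagnostic(grad, x0, lr=0.1, n_iters=20000,
                                    burn_in=100, seed=0):
    """Constant step-size SGD that halves the learning rate whenever a
    Pflug-style diagnostic signals that the iterates have reached their
    oscillation region. `grad(x, rng)` returns an unbiased stochastic
    gradient at x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    g_prev = grad(x, rng)
    stat, count = 0.0, 0
    for _ in range(n_iters):
        g = grad(x, rng)
        x = x - lr * g
        # Running sum of inner products of consecutive stochastic gradients:
        # it tends to grow during the transient phase and to drift negative
        # once the iterates oscillate around a stationary point.
        stat += g_prev @ g
        g_prev = g
        count += 1
        if count >= burn_in and stat < 0:
            lr *= 0.5                 # convergence detected: halve the step
            stat, count = 0.0, 0      # restart the diagnostic
    return x, lr

# Toy problem: noisy quadratic with minimizer x* = (1, ..., 1).
d = 10
x_star = np.ones(d)

def noisy_grad(x, rng):
    return (x - x_star) + 0.1 * rng.standard_normal(d)

x_hat, final_lr = sgd_with_convergence_diagnostic(noisy_grad, np.zeros(d))
print(np.linalg.norm(x_hat - x_star), final_lr)
```

In this sketch the diagnostic is reset after every halving, so each smaller step size gets its own transient and stationary phase.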
Related resources
Fast Convergence of Stochastic Gradient Descent under a Strong Growth Condition
We consider optimizing a smooth convex function f that is the average of a set of differentiable functions f_i, under the assumption considered by Solodov [1998] and Tseng [1998] that the norm of each gradient f'_i is bounded by a linear function of the norm of the average gradient f'. We show that under these assumptions the basic stochastic gradient method with a sufficiently-small c...
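For a concrete sense of the growth condition described above, the following sketch empirically estimates the smallest constant B such that ||f'_i(x)|| <= B * ||f'(x)|| over random test points, for a finite-sum least-squares problem in which every component shares the same minimizer (an interpolation setting where such a bound can hold). The problem instance and the sampling of test points are assumptions for illustration only.

```python
import numpy as np

# Finite-sum least squares f(x) = (1/n) * sum_i 0.5 * (a_i @ x - b_i)^2 with
# b = A @ x_star, so every component f_i is minimized at x_star
# (an interpolation setting, where a growth bound of this kind can hold).
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

def grad_i(x, i):
    return (A[i] @ x - b[i]) * A[i]      # gradient of the i-th component

def grad_full(x):
    return A.T @ (A @ x - b) / n         # gradient of the average f

# Empirically estimate the smallest B with max_i ||f_i'(x)|| <= B * ||f'(x)||.
ratios = []
for _ in range(1000):
    x = x_star + rng.standard_normal(d)
    full_norm = np.linalg.norm(grad_full(x))
    comp_norm = max(np.linalg.norm(grad_i(x, i)) for i in range(n))
    ratios.append(comp_norm / full_norm)
print("empirical growth constant ~", max(ratios))
```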
Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains
We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size. While the detailed analysis was only performed for quadratic functions, we provide an explicit asymptotic expansion of the moments of the averaged SGD iterates that outlines the dependence on initial conditions, the...
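The object analyzed there is the average of the constant step-size SGD iterates. Below is a minimal sketch of that estimator, assuming a quadratic toy objective with additive gradient noise: the last iterate keeps oscillating in a region around the minimizer, while the running average settles much closer to it.

```python
import numpy as np

def averaged_constant_step_sgd(grad, x0, lr=0.05, n_iters=10000, seed=0):
    """Constant step-size SGD that also tracks the running average of the
    iterates (Polyak-Ruppert averaging)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    x_bar = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        x = x - lr * grad(x, rng)
        x_bar += (x - x_bar) / t          # running mean of the iterates
    return x, x_bar

# Quadratic toy problem with additive gradient noise.
d = 5
x_star = np.arange(1.0, d + 1.0)
noisy_grad = lambda x, rng: (x - x_star) + 0.2 * rng.standard_normal(d)

x_last, x_avg = averaged_constant_step_sgd(noisy_grad, np.zeros(d))
print(np.linalg.norm(x_last - x_star), np.linalg.norm(x_avg - x_star))
```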
Barzilai-Borwein Step Size for Stochastic Gradient Descent
One of the major issues in stochastic gradient descent (SGD) methods is how to choose an appropriate step size while running the algorithm. Since the traditional line search technique does not apply to stochastic optimization algorithms, the common practice in SGD is either to use a diminishing step size or to tune a fixed step size by hand. Apparently, these two approaches can be time consum...
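A minimal sketch of one way a Barzilai-Borwein rule can set the SGD step size per epoch, computed from differences of epoch-averaged iterates and averaged stochastic gradients; the per-epoch averaging, the 1/n scaling of the BB step, and the least-squares toy problem are assumptions made here for illustration rather than the exact algorithm proposed in that work.

```python
import numpy as np

def sgd_bb(grad_i, n, x0, lr0=0.05, n_epochs=30, seed=0):
    """SGD in which the step size for each epoch is set by a Barzilai-Borwein
    rule computed from the last two epoch-averaged iterates and the
    corresponding averaged stochastic gradients."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lr = lr0
    prev_avg_x = prev_avg_g = None
    for _ in range(n_epochs):
        sum_x, sum_g = np.zeros_like(x), np.zeros_like(x)
        for i in rng.permutation(n):
            g = grad_i(x, i)
            x = x - lr * g
            sum_x += x
            sum_g += g
        avg_x, avg_g = sum_x / n, sum_g / n
        if prev_avg_x is not None:
            s = avg_x - prev_avg_x                # iterate difference
            y = avg_g - prev_avg_g                # gradient difference
            if abs(s @ y) > 1e-12:
                lr = (s @ s) / (n * abs(s @ y))   # BB step, scaled by epoch length
        prev_avg_x, prev_avg_g = avg_x, avg_g
    return x

# Toy finite-sum least squares.
rng = np.random.default_rng(1)
n, d = 100, 10
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star + 0.01 * rng.standard_normal(n)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]

print(np.linalg.norm(sgd_bb(grad_i, n, np.zeros(d)) - x_star))
```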
IE510 Term Paper: Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz Algorithm
In this paper, we mainly study the convergence properties of stochastic gradient descent (SGD) as described in Needell et al. [2]. The function to be minimized with SGD is assumed to be strongly convex. Also, its gradients are assumed to be Lipschitz continuous. First, we discuss the superior bound on convergence (of standard SGD) obtained by Needell et al. [2] as opposed to the previous work o...
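The weighted-sampling scheme referenced in the title is commonly illustrated by the randomized Kaczmarz method, where row i of a consistent linear system is sampled with probability proportional to ||a_i||^2; the sketch below is such an illustration, with the toy system and iteration count chosen arbitrarily.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=5000, seed=0):
    """Solve a consistent system Ax = b with the randomized Kaczmarz method:
    sample row i with probability ||a_i||^2 / ||A||_F^2 and project the
    current iterate onto the hyperplane {x : a_i @ x = b_i}."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=probs)
        x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]   # projection step
    return x

# Consistent toy system.
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 20))
x_star = rng.standard_normal(20)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_star) - x_star))
```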
Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex Composite optimization
Here we study non-convex composite optimization, where the objective is the sum of a finite average of smooth but non-convex functions and a general function that admits a simple proximal mapping. Most research on stochastic methods for composite optimization assumes convexity or strong convexity of each function. In this paper, we extend this problem to the non-convex setting using variance reduction techniques, ...
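As a minimal illustration of the composite setting, the sketch below runs a proximal stochastic gradient method on a smooth finite-sum term plus an l1 penalty, whose proximal mapping is soft-thresholding. It uses plain stochastic gradients rather than the variance-reduced estimators studied in that paper, and the lasso-style toy problem is an assumption for illustration.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal mapping of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_sgd(grad_i, n, x0, reg=0.01, lr=0.01, n_iters=20000, seed=0):
    """Proximal stochastic gradient descent for f(x) + reg * ||x||_1, where f
    is a finite sum accessed through per-sample gradients grad_i(x, i)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        i = rng.integers(n)
        x = soft_threshold(x - lr * grad_i(x, i), lr * reg)   # gradient step + prox
    return x

# Sparse least-squares toy problem.
rng = np.random.default_rng(3)
n, d = 200, 20
A = rng.standard_normal((n, d))
x_star = np.zeros(d)
x_star[:3] = [2.0, -1.0, 0.5]
b = A @ x_star + 0.05 * rng.standard_normal(n)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
print(np.round(prox_sgd(grad_i, n, np.zeros(d)), 2))
```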
Journal: CoRR
Volume: abs/1710.06382
Publication year: 2017